Talk by Lav Varshney from the University of Illinois at Urbana-Champaign, given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
Abstract: Recent advances in associative memory design, through structured pattern sets and graph-based inference algorithms, have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively, such as the hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize their performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns is not reduced by noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks. In closing, we discuss the use of associative algorithms in computational creativity, as well as related faulty graph-based inference algorithms for decoding error-correcting codes and for reconstructing visual receptive fields and cortical connectomes.
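To make the setting concrete, the sketch below simulates associative recall when the neurons themselves compute unreliably. It is a minimal Hopfield-style toy model, not the structured, graph-based construction discussed in the talk: "internal noise" is modeled by flipping each neuron's update with a small probability, while "external errors" are bit flips applied to the probe pattern before recall. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal Hopfield-style associative memory (illustrative only; the talk's
# construction uses structured pattern sets and graph-based inference).
N = 64                                        # number of neurons
patterns = rng.choice([-1, 1], size=(3, N))   # a few stored +/-1 patterns
W = (patterns.T @ patterns) / N               # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                      # no self-connections

def recall(probe, internal_noise=0.05, sweeps=20):
    """Asynchronous recall in which each neuron's computed update is
    flipped with probability `internal_noise` -- a crude stand-in for
    noisy internal computation."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = np.sign(W[i] @ s) or 1.0       # noiseless local field
            if rng.random() < internal_noise:  # internal computation noise
                h = -h
            s[i] = h
    return s

# External errors: flip 5 bits of a stored pattern, then attempt recall.
probe = patterns[0].copy()
flip = rng.choice(N, size=5, replace=False)
probe[flip] *= -1

out = recall(probe, internal_noise=0.05, sweeps=20)  # noisy recall phase
out = recall(out, internal_noise=0.0, sweeps=2)      # noiseless readout
overlap = out @ patterns[0] / N                      # 1.0 means exact recall
```

With the internal noise level well below threshold, recall still converges to the stored pattern (overlap near 1). The stronger claim in the abstract, that moderate internal noise can actually help, is established analytically for the graph-based designs and is not demonstrated by this toy model.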